Achieving accurate video recognition at low computational cost is a challenge for artificial-intelligence systems. Efficient video recognition methods based on adaptive inference typically preview a video and focus on its salient parts to reduce computation. Most existing works concentrate on complex networks learned with video-classification objectives, treating all frames as positive samples, and few of them pay attention to the discrimination between positive samples (salient frames) and negative samples (non-salient frames) in their supervision. To fill this gap, in this paper we propose a novel Non-Saliency Suppression Network (NSNet) that effectively suppresses the responses of non-salient frames. Specifically, at the frame level, effective pseudo labels that can distinguish salient from non-salient frames are generated to guide frame-level saliency learning. At the video level, a temporal attention module is learned under dual video-level supervisions on both the salient and the non-salient representations. Saliency measurements from the two levels are combined to exploit multi-granularity complementary information. Extensive experiments on four well-known benchmarks verify that our NSNet not only achieves a state-of-the-art accuracy-efficiency trade-off but also delivers a significantly faster (2.4~4.3x) practical inference speed than state-of-the-art methods. Our project page is at https://lawrencexia2008.github.io/projects/nsnet.
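As an illustration of the video-level component described above, the following is a minimal PyTorch sketch (not the authors' code; the loss choices and dimensions are assumptions) of a temporal attention module that produces a salient and a non-salient video-level representation from per-frame features and gives each its own supervision.

```python
# Minimal sketch: temporal attention over per-frame features with dual
# video-level supervision. Dimensions and losses are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionHead(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=200):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)           # per-frame saliency score
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats):                   # (B, T, D) per-frame features
        scores = self.attn(frame_feats).squeeze(-1)   # (B, T)
        w = scores.softmax(dim=1)                     # salient frames get high weight
        w_inv = 1.0 - w
        w_inv = w_inv / w_inv.sum(dim=1, keepdim=True)
        salient = (w.unsqueeze(-1) * frame_feats).sum(dim=1)       # (B, D)
        non_salient = (w_inv.unsqueeze(-1) * frame_feats).sum(dim=1)
        return self.classifier(salient), self.classifier(non_salient)

# Placeholder dual supervision: the salient branch is trained with the video
# label, the non-salient branch is pushed toward an uninformative uniform
# prediction. These objectives are assumptions, not the paper's exact losses.
head = TemporalAttentionHead()
feats = torch.randn(4, 16, 2048)                      # 4 videos, 16 frames each
labels = torch.randint(0, 200, (4,))
logits_sal, logits_nonsal = head(feats)
loss_sal = F.cross_entropy(logits_sal, labels)
uniform = torch.full_like(logits_nonsal.softmax(-1), 1.0 / 200)
loss_nonsal = F.kl_div(logits_nonsal.log_softmax(-1), uniform, reduction="batchmean")
loss = loss_sal + loss_nonsal
```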
This study proposes a distributed algorithm that enables adaptively grouped agents to entrap multiple targets through autonomous decision making, smooth flocking, and well-distributed entrapping. Each agent makes its own decisions based on environmental information. An improved artificial potential field method is proposed to allow agents to change formation smoothly and naturally in order to adapt to the environment. The proposed strategy ensures that coordinated entrapment of multiple targets emerges at the swarm level. We validate the performance of the proposed method with simulation experiments and design metrics for analyzing these simulations, as well as with physical experiments.
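For readers unfamiliar with potential-field control, the sketch below shows a standard (unimproved) artificial potential field in Python: each agent is attracted to its assigned target and repelled by nearby neighbours, producing a smooth velocity command. The gains and radius are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal sketch of a conventional artificial potential field (not the
# paper's improved variant): attraction to the target plus repulsion from
# neighbours inside a fixed radius.
import numpy as np

def apf_velocity(pos, target, neighbours, k_att=1.0, k_rep=0.5, rep_radius=1.5):
    """pos, target: (2,) arrays; neighbours: (N, 2) array of other agents."""
    v = k_att * (target - pos)                        # attractive term toward target
    for q in neighbours:
        d = np.linalg.norm(pos - q)
        if 1e-6 < d < rep_radius:                     # repulsive term inside radius
            v += k_rep * (1.0 / d - 1.0 / rep_radius) * (pos - q) / d**3
    return v

# Example: one agent, one assigned target, two neighbours
pos = np.array([0.0, 0.0])
target = np.array([5.0, 3.0])
neighbours = np.array([[0.5, 0.2], [4.0, 4.0]])
print(apf_velocity(pos, target, neighbours))
```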
Soft actuators have shown great advantages in compliance and morphology for manipulating delicate objects and for inspection in confined spaces. There is an unmet need for soft actuators that can provide torsional motion to enlarge the workspace and increase degrees of freedom. Toward this goal, we present origami-inspired soft pneumatic actuators (OSPAs) made from silicone. The prototype can output a rotation of more than one revolution (up to 435°), larger than its previous counterparts. We describe the design and fabrication method, build kinematic and simulation models, and analyze and optimize the parameters. Finally, we demonstrate the actuators by integrating them into a gripper capable of simultaneously grasping and lifting fragile or flat objects, a versatile robot arm capable of picking and placing items at the right angle with its twisting actuators, and a soft snake robot capable of changing attitude and direction by torsion of its twisting actuators.
Photo-realistic style transfer aims at migrating the artistic style from an exemplar style image to a content image, producing a result image without spatial distortions or unrealistic artifacts. Impressive results have been achieved by recent deep models. However, deep neural network based methods are too expensive to run in real-time. Meanwhile, bilateral grid based methods are much faster but still contain artifacts like overexposure. In this work, we propose the \textbf{Adaptive ColorMLP (AdaCM)}, an effective and efficient framework for universal photo-realistic style transfer. First, we find the complex non-linear color mapping between input and target domain can be efficiently modeled by a small multi-layer perceptron (ColorMLP) model. Then, in \textbf{AdaCM}, we adopt a CNN encoder to adaptively predict all parameters for the ColorMLP conditioned on each input content and style image pair. Experimental results demonstrate that AdaCM can generate vivid and high-quality stylization results. Meanwhile, our AdaCM is ultrafast and can process a 4K resolution image in 6ms on one V100 GPU.
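The core idea of predicting all ColorMLP parameters with a CNN encoder can be sketched as a small hypernetwork. The layer sizes and encoder below are assumptions for illustration, not the actual AdaCM architecture.

```python
# Minimal sketch (layer sizes assumed): a CNN encoder predicts every weight
# and bias of a tiny per-pixel ColorMLP (3 -> 16 -> 3), which is then applied
# to the content image colors.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN = 16
N_PARAMS = 3 * HIDDEN + HIDDEN + HIDDEN * 3 + 3       # weights + biases of 2 layers

class ParamEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, N_PARAMS)

    def forward(self, content, style):                 # both (B, 3, H, W)
        x = torch.cat([content, style], dim=1)
        return self.fc(self.conv(x).flatten(1))        # (B, N_PARAMS)

def apply_colormlp(content, params):                   # apply the predicted MLP per pixel
    B, _, H, W = content.shape
    px = content.permute(0, 2, 3, 1).reshape(B, -1, 3)             # (B, HW, 3)
    i = 0
    w1 = params[:, i:i + 3 * HIDDEN].view(B, 3, HIDDEN); i += 3 * HIDDEN
    b1 = params[:, i:i + HIDDEN].view(B, 1, HIDDEN);     i += HIDDEN
    w2 = params[:, i:i + HIDDEN * 3].view(B, HIDDEN, 3); i += HIDDEN * 3
    b2 = params[:, i:i + 3].view(B, 1, 3)
    h = F.relu(torch.bmm(px, w1) + b1)
    out = torch.bmm(h, w2) + b2
    return out.reshape(B, H, W, 3).permute(0, 3, 1, 2)

content, style = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
stylized = apply_colormlp(content, ParamEncoder()(content, style))
```

Because the ColorMLP operates purely on pixel colors with a handful of parameters, evaluating it over a 4K image is cheap once the encoder has run on downsampled inputs, which is consistent with the speed claim above.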
The main challenge for fine-grained few-shot image classification is to learn feature representations with higher inter-class and lower intra-class variations, with a mere few labelled samples. Conventional few-shot learning methods, however, cannot be naively adopted for this fine-grained setting -- a quick pilot study reveals that they in fact push for the opposite (i.e., lower inter-class variations and higher intra-class variations). To alleviate this problem, prior works predominantly use a support set to reconstruct the query image and then utilize metric learning to determine its category. Upon careful inspection, we further reveal that such unidirectional reconstruction methods only help to increase inter-class variations and are not effective in tackling intra-class variations. In this paper, we for the first time introduce a bi-reconstruction mechanism that can simultaneously accommodate inter-class and intra-class variations. In addition to using the support set to reconstruct the query set for increasing inter-class variations, we further use the query set to reconstruct the support set for reducing intra-class variations. This design effectively helps the model to explore more subtle and discriminative features, which is key to the fine-grained problem at hand. Furthermore, we also construct a self-reconstruction module to work alongside the bi-directional module to make the features even more discriminative. Experimental results on three widely used fine-grained image classification datasets consistently show considerable improvements compared with other methods. Codes are available at: https://github.com/PRIS-CV/Bi-FRN.
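A hedged sketch of the bi-reconstruction idea, assuming a closed-form ridge-regression reconstructor as used in feature-reconstruction-based few-shot methods; the regularization weight and feature shapes are illustrative, not the paper's settings. The support features of a class reconstruct the query features and vice versa, and the combined reconstruction error can serve as a (negative) class score.

```python
# Minimal sketch: bidirectional feature reconstruction via ridge regression.
import torch

def ridge_reconstruct(source, target, lam=0.1):
    """Reconstruct `target` (m, d) as a linear combination of `source` (n, d)."""
    n, _ = source.shape
    gram = source @ source.t() + lam * torch.eye(n)
    weights = target @ source.t() @ torch.linalg.inv(gram)          # (m, n)
    return weights @ source                                          # (m, d)

def bi_reconstruction_error(support, query):
    """support: (n, d) features of one class; query: (m, d) query features."""
    err_s2q = ((ridge_reconstruct(support, query) - query) ** 2).sum(dim=1).mean()
    err_q2s = ((ridge_reconstruct(query, support) - support) ** 2).sum(dim=1).mean()
    return err_s2q + err_q2s                 # lower error => more likely this class

support = torch.randn(25, 64)    # e.g. 5 shots x 5 spatial positions, 64-d features
query = torch.randn(5, 64)
print(bi_reconstruction_error(support, query))
```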
Reference-based image super-resolution (RefSR) is a promising SR branch and has shown great potential in overcoming the limitations of single image super-resolution. While previous state-of-the-art RefSR methods mainly focus on improving the efficacy and robustness of reference feature transfer, it is generally overlooked that a well reconstructed SR image should in turn enable better SR reconstruction of similar LR images when it is used as a reference. Therefore, in this work, we propose a reciprocal learning framework that appropriately leverages this fact to reinforce the learning of a RefSR network. Besides, we deliberately design a progressive feature alignment and selection module to further improve the RefSR task. The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection in a progressive manner, so that more precise reference features can be transferred into the input features and the network capability is enhanced. Our reciprocal learning paradigm is model-agnostic and can be applied to arbitrary RefSR models. We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm. Furthermore, our proposed model together with the reciprocal learning strategy sets new state-of-the-art performances on multiple benchmarks.
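A minimal sketch of one way such a reciprocal training step could look; the toy network and loss weight below are assumptions, not the authors' model. The SR output of the first pass is reused as the reference for a similar LR image, so the gradient of the second loss also rewards a better first-pass reconstruction.

```python
# Minimal sketch of a reciprocal RefSR training step (toy model, assumed losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRefSR(nn.Module):                        # stand-in for any RefSR network
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(6, 3, 3, padding=1)
    def forward(self, lr, ref):                   # lr: (B,3,h,w), ref: (B,3,2h,2w)
        up = F.interpolate(lr, scale_factor=2, mode="bilinear", align_corners=False)
        return self.conv(torch.cat([up, ref], dim=1))

def reciprocal_step(model, optimizer, lr_a, hr_a, lr_b, hr_b, ref, w=0.5):
    sr_a = model(lr_a, ref)                       # standard RefSR pass
    sr_b = model(lr_b, sr_a)                      # reciprocal pass: sr_a is the reference
    loss = F.l1_loss(sr_a, hr_a) + w * F.l1_loss(sr_b, hr_b)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

model = ToyRefSR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = lambda s: torch.rand(1, 3, s, s)
print(reciprocal_step(model, opt, x(32), x(64), x(32), x(64), x(64)))
```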
Great success has been achieved with deep learning techniques for image super-resolution (SR) at fixed scale factors. To improve real-world applicability, many models have also been proposed to restore SR images with arbitrary scale factors, including asymmetric ones where the image is resized at different scales along the horizontal and vertical directions. While most models are optimized only for the unidirectional upscaling task, assuming a predefined downscaling kernel for the low-resolution (LR) inputs, recent models based on invertible neural networks (INNs) are able to significantly improve upscaling accuracy by jointly optimizing the downscaling and upscaling cycle. However, limited by the INN architecture, they are constrained to fixed integer scale factors and require one model per scale. Without increasing model complexity, a simple and effective invertible arbitrary rescaling network (IARN) is proposed in this work to achieve arbitrary image rescaling by training only one model. Using innovative components such as position-aware scale encoding and preemptive channel splitting, the network is optimized to convert the non-invertible rescaling cycle into an effectively invertible process. It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising the perceptual quality of the LR outputs, and it is also demonstrated to perform well on tests with asymmetric scales using the same network architecture.
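As a hedged illustration of scale conditioning (not the paper's exact position-aware scale encoding), a common way to make a single network handle arbitrary and asymmetric scales is to feed it, per output pixel, the fractional offset of that pixel in the input grid together with the horizontal and vertical scale factors:

```python
# Minimal sketch: per-pixel position-and-scale conditioning for arbitrary
# (including asymmetric) rescaling. This is an assumed encoding for
# illustration only.
import torch

def position_scale_encoding(out_h, out_w, scale_h, scale_w):
    """Returns a (out_h, out_w, 4) tensor: (frac_y, frac_x, 1/scale_h, 1/scale_w)."""
    ys = (torch.arange(out_h) + 0.5) / scale_h        # projected positions in input grid
    xs = (torch.arange(out_w) + 0.5) / scale_w
    frac_y = (ys - ys.floor()).view(-1, 1).expand(out_h, out_w)
    frac_x = (xs - xs.floor()).view(1, -1).expand(out_h, out_w)
    inv_h = torch.full((out_h, out_w), 1.0 / scale_h)
    inv_w = torch.full((out_h, out_w), 1.0 / scale_w)
    return torch.stack([frac_y, frac_x, inv_h, inv_w], dim=-1)

enc = position_scale_encoding(96, 128, scale_h=1.5, scale_w=2.0)   # asymmetric scales
print(enc.shape)   # torch.Size([96, 128, 4])
```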
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge consists of two tracks. Track 1 targets the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Image-Text Retrieval (ITR) is challenging in bridging the visual and linguistic modalities. Contrastive learning has been adopted by most prior arts. Apart from the limited number of negative image-text pairs, the capability of contrastive learning is restricted by manual weighting of negative pairs and unawareness of external knowledge. In this paper, we propose novel Coupled Diversity-Sensitive Momentum Contrastive Learning (CODER) to improve cross-modal representation. First, a novel diversity-sensitive contrastive learning (DCL) architecture is devised. We introduce dynamic dictionaries for both modalities to enlarge the scale of image-text pairs, and diversity-sensitiveness is achieved by adaptive negative pair weighting. Furthermore, two branches are designed in CODER. One learns instance-level embeddings from images/texts, and it also generates pseudo online clustering labels for its input images/texts based on their embeddings. Meanwhile, the other branch learns to query a commonsense knowledge graph to form concept-level descriptors for both modalities. Afterwards, both branches leverage DCL to align the cross-modal embedding spaces, while an extra pseudo clustering label prediction loss is utilized to promote concept-level representation learning in the second branch. Extensive experiments on two popular benchmarks, i.e. MSCOCO and Flicker30K, validate that CODER remarkably outperforms the state-of-the-art approaches.
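The dynamic-dictionary and adaptive-weighting ingredients can be sketched as a weighted InfoNCE loss over a momentum queue of text embeddings. The weighting rule below (up-weighting harder negatives by their similarity to the query image) is an illustrative assumption, not the exact CODER formulation.

```python
# Minimal sketch: InfoNCE over a momentum dictionary (queue) of negative text
# embeddings, with adaptive per-negative weights instead of uniform weighting.
import torch
import torch.nn.functional as F

def weighted_info_nce(img_emb, pos_txt_emb, txt_queue, temperature=0.07):
    """img_emb, pos_txt_emb: (B, D); txt_queue: (K, D) momentum dictionary."""
    img_emb = F.normalize(img_emb, dim=-1)
    pos_txt_emb = F.normalize(pos_txt_emb, dim=-1)
    txt_queue = F.normalize(txt_queue, dim=-1)
    pos = (img_emb * pos_txt_emb).sum(dim=-1) / temperature          # (B,)
    neg = img_emb @ txt_queue.t() / temperature                      # (B, K)
    neg_w = torch.softmax(neg.detach(), dim=-1) * neg.size(1)        # harder negatives weighted up
    denom = pos.exp() + (neg_w * neg.exp()).sum(dim=-1)
    return -(pos - denom.log()).mean()

loss = weighted_info_nce(torch.randn(8, 256), torch.randn(8, 256), torch.randn(4096, 256))
print(loss)
```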
Video-Text Retrieval (VTR) is an attractive yet challenging task for multi-modal understanding, which aims to search for relevant videos (texts) given a text (video) query. Existing methods typically employ completely heterogeneous visual-textual information to align videos and texts, while lacking awareness of the homogeneous high-level semantic information shared by the two modalities. To fill this gap, in this work we propose a novel visual-linguistic aligning model named HiSE for VTR, which improves cross-modal representation by incorporating explicit high-level semantics. First, we explore the hierarchical property of explicit high-level semantics and further decompose it into two levels, i.e., discrete semantics and holistic semantics. Specifically, for the visual branch, we exploit an off-the-shelf semantic entity predictor to generate discrete high-level semantics, while a trained video captioning model is employed to output holistic high-level semantics. For the textual modality, we parse the text into three parts: occurrence, action, and entity. In particular, the occurrence corresponds to the holistic high-level semantics, while the action and entity represent the discrete ones. Then, different graph reasoning techniques are utilized to promote the interaction between holistic and discrete high-level semantics. Extensive experiments demonstrate that, with the aid of explicit high-level semantics, our method achieves superior performance over state-of-the-art methods on three benchmark datasets, including MSR-VTT, MSVD, and DiDeMo.
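A minimal sketch of the graph-reasoning step, under the assumption of a single fully connected message-passing layer over one holistic node and several discrete (action/entity) nodes; the actual paper may use different graph structures and reasoning techniques.

```python
# Minimal sketch: one round of message passing between a holistic-semantics
# node and discrete-semantics nodes over a fully connected (placeholder) graph.
import torch
import torch.nn as nn

class SemanticGraphLayer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, nodes):                  # (N, D): node 0 = holistic, rest = discrete
        n = nodes.size(0)
        adj = torch.ones(n, n) / n             # uniform fully connected adjacency (placeholder)
        return torch.relu(nodes + adj @ self.proj(nodes))   # residual message passing

holistic = torch.randn(1, 256)                 # e.g. from a video captioning model / occurrence
discrete = torch.randn(4, 256)                 # e.g. entity and action embeddings
fused = SemanticGraphLayer()(torch.cat([holistic, discrete], dim=0))
print(fused.shape)                             # torch.Size([5, 256])
```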